Markov chains: Ergodicity, quasi-stationarity and asymmetry
Authors
Abstract
Similar resources
Quasi-stationarity and quasi-ergodicity of General Markov Processes
In this paper we give some general, but easy-to-check, conditions guaranteeing the quasi-stationarity and quasi-ergodicity of Markov processes. We also present several classes of Markov processes satisfying our conditions. AMS 2010 Mathematics Subject Classification: Primary 60F25, 60J25, 60G20; Secondary 60J35, 60J75
Subgeometric ergodicity of Markov chains
When f ≡ 1, the f-norm is the total variation norm, denoted ‖μ‖TV. Assume that P is aperiodic and positive Harris recurrent with stationary distribution π. Then the iterated kernels Pⁿ(x, ·) converge to π. The rate of convergence of Pⁿ(x, ·) to π does not depend on the starting state x, but exact bounds may depend on x. Hence, it is of interest to obtain non-uniform or quantitative bounds o...
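The convergence described in this abstract can be illustrated numerically. The sketch below uses a small hypothetical 3-state transition matrix (not taken from the paper), computes its stationary distribution π as the left Perron eigenvector, and shows the total variation distance ‖Pⁿ(x, ·) − π‖TV shrinking as n grows:

```python
import numpy as np

# A small illustrative 3-state chain (hypothetical transition matrix,
# not from the paper): irreducible and aperiodic, hence positive recurrent.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

def tv_distance(mu, nu):
    """Total variation distance: (1/2) * sum_i |mu_i - nu_i|."""
    return 0.5 * np.abs(mu - nu).sum()

# Iterate the kernel from a fixed starting state x = 0 and watch
# ||P^n(x, .) - pi||_TV decrease; the limit is 0 for every x, though
# the finite-n value depends on x.
row = np.zeros(3)
row[0] = 1.0
for n in range(1, 6):
    row = row @ P
    print(n, tv_distance(row, pi))
```

For this finite, aperiodic chain the decay is in fact geometric; the subgeometric rates studied in the paper arise on general state spaces where such uniform geometric bounds fail.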
Geometric Ergodicity and Hybrid Markov Chains
Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will “inherit” the geometric ergo...
Discrete Time Markov Chains: Ergodicity Theory
Lecture 8: Discrete Time Markov Chains: Ergodicity Theory Announcements: 1. We handed out HW2 solutions and your homeworks in Friday’s recitation. I am handing out a few extras today. Please make sure you get these! 2. Remember that I now have office hours both: Wednesday at 3 p.m. and Thursday at 4 p.m. Please show up and ask questions about the lecture notes, not just the homework! No one cam...
Quasi-stationarity of Discrete-time Markov Chains with Drift to Infinity
We consider a discrete-time Markov chain on the non-negative integers with drift to infinity and study the limiting behaviour of the state probabilities conditioned on not having left state 0 for the last time. Using a transformation, we obtain a dual Markov chain with an absorbing state such that absorption occurs with probability 1. We prove that the state probabilities of the original chain c...
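The absorbing dual chain mentioned in this abstract is the standard setting for quasi-stationary distributions. As a minimal sketch (an illustrative truncated random walk of my own choosing, not the paper's construction), the QSD of a finite absorbing chain is the normalized left Perron eigenvector of the sub-stochastic kernel Q restricted to the transient states:

```python
import numpy as np

# Hypothetical dual chain: random walk on {0, 1, ..., N} with 0 absorbing.
# Q is the sub-stochastic kernel on the transient states 1..N
# (row index i-1 corresponds to state i).
N = 20
p = 0.3  # up-probability; downward drift toward the absorbing state 0
Q = np.zeros((N, N))
for i in range(1, N + 1):
    if i < N:
        Q[i - 1, i] = p          # step up: i -> i+1
    else:
        Q[N - 1, N - 1] += p     # truncation at N: stay with probability p
    if i > 1:
        Q[i - 1, i - 2] = 1 - p  # step down: i -> i-1
    # from state 1, the down-mass 1-p is lost to absorption at 0

# Quasi-stationary distribution: normalized left eigenvector of Q for its
# Perron eigenvalue rho (0 < rho < 1, since absorption is certain).
evals, evecs = np.linalg.eig(Q.T)
k = np.argmax(evals.real)
rho = evals[k].real              # starting from the QSD, P(T > n) = rho**n
qsd = np.abs(evecs[:, k].real)
qsd /= qsd.sum()

# Invariance check: one step of Q followed by renormalization
# (conditioning on survival) maps the QSD back to itself.
mu = qsd @ Q
mu /= mu.sum()
```

The renormalization step is exactly the "conditioned on not yet being absorbed" operation; the paper's contribution is relating such conditioned limits of the dual chain back to the original chain with drift to infinity.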
Journal
Journal title: SCIENTIA SINICA Mathematica
Year: 2019
ISSN: 1674-7216
DOI: 10.1360/n012019-00069